A first order method for finding minimal norm-like solutions of convex optimization problems

Authors

  • Amir Beck
  • Shoham Sabach
Abstract

We consider a general class of convex optimization problems in which one seeks to minimize a strongly convex function over a closed and convex set which is itself the optimal set of another convex problem. We introduce a gradient-based method, called the minimal norm gradient method, for solving this class of problems, and establish the convergence of the sequence generated by the algorithm as well as a rate of convergence of the sequence of function values. A portfolio optimization example is given in order to illustrate our results.
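To convey the flavor of such bilevel problems (this is not the authors' minimal norm gradient method, just a classical illustration), consider the simplest instance: minimizing the norm ||x||² over the set of least-squares minimizers of ||Ax − b||². A well-known fact is that the Tikhonov-regularized solution converges to the minimum-norm least-squares solution as the regularization parameter vanishes; the sketch below checks this numerically against the pseudoinverse solution.

```python
import numpy as np

# Bilevel problem: minimize ||x||^2 over argmin_x ||Ax - b||^2.
# Classical fact: x_eps = argmin ||Ax - b||^2 + eps * ||x||^2 converges to
# the minimum-norm least-squares solution as eps -> 0.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 5))   # underdetermined: many least-squares solutions
b = rng.standard_normal(3)

def tikhonov(A, b, eps):
    """Solve the regularized normal equations (A^T A + eps I) x = A^T b."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + eps * np.eye(n), A.T @ b)

x_min_norm = np.linalg.pinv(A) @ b   # reference minimum-norm solution
x_eps = tikhonov(A, b, 1e-6)
print(np.linalg.norm(x_eps - x_min_norm))  # small for small eps
```

The regularization path is a conceptually simple but indirect route to the minimum-norm solution; a first-order method such as the one proposed in the paper avoids solving a sequence of regularized problems.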


Similar articles

The Quasi-Normal Direction (QND) Method: An Efficient Method for Finding the Pareto Frontier in Multi-Objective Optimization Problems

In managerial and economic applications, problems arise in which the goal is to simultaneously optimize several criteria functions (CFs). However, since the CFs conflict with each other in such cases, no feasible point exists at which all CFs are optimized simultaneously. Thus, in such cases, a set of points, referred to as 'non-dominated' points (NDPs), will be...


Existence and approximation of solutions for Fredholm equations of the first kind with applications to a linear moment problem

The Cimmino algorithm is an iterative projection method for finding almost common points of measurable families of closed convex sets in a Hilbert space. When applied to Fredholm equations of the first kind, the Cimmino algorithm produces weak approximations of solutions provided that solutions exist. We show that for consistent Fredholm equations of the first kind whose data satisfy some spect...
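The finite-dimensional analogue of the Cimmino iteration is easy to sketch: for a consistent linear system, project the current iterate onto each hyperplane of the system and move to the average of the projections. The example below is a hypothetical illustration for a small 2×2 system, not drawn from the cited paper.

```python
import numpy as np

# Cimmino-style iteration for a consistent linear system A x = b:
# at each step, project the iterate onto every hyperplane a_i . x = b_i
# and replace the iterate with the average of those projections.
A = np.array([[1.0, 2.0], [3.0, -1.0]])
x_true = np.array([1.0, 2.0])
b = A @ x_true                       # consistent by construction

def cimmino(A, b, iters=500):
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        projections = []
        for a_i, b_i in zip(A, b):
            # orthogonal projection of x onto the hyperplane a_i . x = b_i
            projections.append(x - (a_i @ x - b_i) / (a_i @ a_i) * a_i)
        x = np.mean(projections, axis=0)
    return x

x = cimmino(A, b)
```

Because every projection is computed from the same iterate, the inner loop parallelizes trivially, which is the method's traditional appeal over sequential schemes such as Kaczmarz.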


An efficient one-layer recurrent neural network for solving a class of nonsmooth optimization problems

Constrained optimization problems have a wide range of applications in science, economics, and engineering. In this paper, a neural network model is proposed to solve a class of nonsmooth constrained optimization problems with a nonsmooth convex objective function subject to nonlinear inequality and affine equality constraints. It is a one-layer non-penalty recurrent neural network based on the...


Convex Optimization without Projection Steps

We study the general problem of minimizing a convex function over a compact convex domain. We will investigate a simple iterative approximation algorithm based on the method by Frank & Wolfe [FW56], that does not need projection steps in order to stay inside the optimization domain. Instead of a projection step, the linearized problem defined by a current subgradient is solved, which gives a st...
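The projection-free idea can be sketched in a few lines: instead of projecting back onto the domain, each step solves the linearized problem over the domain (a linear minimization oracle) and takes a convex combination with the result. The toy example below minimizes a quadratic over the probability simplex, where the oracle simply returns the vertex with the smallest gradient entry; the setup and names are illustrative, not taken from the cited paper.

```python
import numpy as np

# Frank-Wolfe sketch: minimize f(x) = ||x - c||^2 over the probability
# simplex. No projection is needed: the linear minimization oracle over
# the simplex is a vertex, namely the coordinate of the smallest
# gradient entry.
c = np.array([0.1, 0.5, 0.4])        # c lies in the simplex, so x* = c

def frank_wolfe(c, iters=5000):
    n = len(c)
    x = np.ones(n) / n               # start at the barycenter
    for k in range(iters):
        grad = 2.0 * (x - c)
        s = np.zeros(n)
        s[np.argmin(grad)] = 1.0     # LMO: best vertex of the simplex
        gamma = 2.0 / (k + 2.0)      # standard diminishing step size
        x = x + gamma * (s - x)      # convex combination stays feasible
    return x

x = frank_wolfe(c)
```

Every iterate is a convex combination of simplex vertices, so feasibility holds by construction; the known O(1/k) rate in function values matches the snippet's slow but projection-free progress.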


Best interpolation in a strip II: Reduction to unconstrained convex optimization

In this paper, we study the problem of finding a real-valued function f on the interval [0, 1] with minimal L2 norm of the second derivative that interpolates the points (t_i, y_i) and satisfies e(t) ≤ f'(t) ≤ d(t) for t ∈ [0, 1]. The functions e and d are continuous in each interval (t_i, t_{i+1}) and at the endpoints, but may be discontinuous at t_i. Based on an earlier paper by the first author [7] we c...



Journal:
  • Math. Program.

Volume 147  Issue -

Pages -

Publication date 2014